
    Generating random surfaces with desired autocorrelation length

    A versatile surface processing method based on electrostatic deposition of particles and subsequent dry etching is shown to tailor the autocorrelation length of a random surface by varying particle size and coverage. An explicit relation between the final autocorrelation length, the surface coverage of the particles, the particle size, and the etch depth is derived. The autocorrelation length of the final surface closely follows a power-law decay with particle coverage, the most significant processing parameter. Experimental results on silicon substrates agree reasonably well with model predictions.
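    The power-law decay described above can be sketched numerically. The prefactor and exponent below are illustrative placeholders, not fitted values from the paper:

```python
import numpy as np

# Hypothetical power-law model: the autocorrelation length decays with
# fractional particle coverage. The prefactor a and exponent b would be
# fit to experimental data; the defaults here are illustrative only.
def autocorrelation_length(coverage, a=10.0, b=0.5):
    """Return a modeled autocorrelation length for a fractional
    particle coverage in (0, 1]."""
    coverage = np.asarray(coverage, dtype=float)
    return a * coverage ** (-b)

# Higher coverage -> shorter autocorrelation length.
lengths = autocorrelation_length(np.array([0.1, 0.2, 0.4]))
```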

    Adversarial Defense via Neural Oscillation inspired Gradient Masking

    Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility. As they are widely deployed in neuromorphic devices for low-power brain-inspired computing, security issues become increasingly important. However, compared to deep neural networks (DNNs), SNNs currently lack specifically designed defense methods against adversarial attacks. Inspired by neural membrane potential oscillation, we propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs. Our experiments show that SNNs with neural oscillation neurons resist adversarial attacks better than ordinary SNNs with LIF neurons across a range of architectures and datasets. Furthermore, we propose a defense method that changes the model's gradients by replacing the form of the oscillation, which hides the original training gradients and tricks the attacker into using the gradients of 'fake' neurons to generate invalid adversarial samples. Our experiments suggest that the proposed defense method can effectively resist both single-step and iterative attacks, with defense effectiveness comparable to, and computational cost much lower than, adversarial training methods on DNNs. To the best of our knowledge, this is the first work that establishes adversarial defense through masking surrogate gradients on SNNs.
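    A minimal sketch of the gradient-masking idea, with illustrative surrogate shapes (the paper's actual oscillation model is more elaborate): the forward spiking behavior is unchanged, but the surrogate gradient exposed to an attacker oscillates around zero and points in misleading directions.

```python
import numpy as np

def spike(v, threshold=1.0):
    # Forward pass: hard threshold, as in a standard LIF neuron.
    # This is identical whether or not gradient masking is active.
    return (v >= threshold).astype(float)

def true_surrogate_grad(v, threshold=1.0, width=0.5):
    # Surrogate gradient used for training: a smooth positive bump
    # centered on the firing threshold.
    return np.exp(-((v - threshold) ** 2) / (2 * width ** 2))

def masked_surrogate_grad(v, threshold=1.0, freq=5.0):
    # "Fake" oscillatory gradient shown to the attacker: it flips sign
    # with membrane potential, so gradient-based attacks follow
    # misleading directions and produce invalid adversarial samples.
    return np.cos(freq * (v - threshold))
```

    At a potential of 1.6 the true surrogate is positive while the masked one is negative, so a one-step gradient attack using the fake neurons perturbs the input the wrong way.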

    A noise based novel strategy for faster SNN training

    Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility. Optimizing SNNs is a challenging task. The two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have advantages and limitations. ANN-to-SNN conversion requires a long inference time to approximate the accuracy of the ANN, which diminishes the benefits of SNNs. With spike-based BP, training high-precision SNNs typically consumes dozens of times more computational resources and time than their ANN counterparts. In this paper, we propose a novel SNN training approach that combines the benefits of the two methods. We first train a single-step SNN (T=1) by approximating the neural potential distribution with random noise, then losslessly convert the single-step SNN (T=1) to a multi-step SNN (T=N). The introduction of Gaussian-distributed noise leads to a significant gain in accuracy after conversion. The results show that our method considerably reduces the training and inference times of SNNs while maintaining their high accuracy. Compared to the previous two methods, ours reduces training time by 65%-75% and achieves more than 100 times faster inference. We also argue that the noise-augmented neuron model is more bio-plausible.
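    The train-at-T=1, deploy-at-T=N idea can be sketched as follows. All names and parameters are illustrative, not the paper's implementation: Gaussian noise added to the membrane potential during single-step training stands in for the potential's variability across timesteps, and the same weights are then run for multiple steps with reset-by-subtraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_step_forward(x, w, threshold=1.0, noise_std=0.3, train=True):
    # T=1 forward pass: one membrane update, with Gaussian noise on the
    # potential during training to mimic its multi-step distribution.
    v = x @ w
    if train:
        v = v + rng.normal(0.0, noise_std, size=v.shape)
    return (v >= threshold).astype(float)

def multi_step_forward(x, w, T=4, threshold=1.0):
    # T=N deployment with the same weights: accumulate the potential,
    # fire, and reset by subtraction; the spike rate over T steps
    # approximates the single-step output.
    v = np.zeros(x.shape[0])
    spikes = 0.0
    for _ in range(T):
        v = v + x @ w
        fired = v >= threshold
        spikes = spikes + fired.astype(float)
        v = np.where(fired, v - threshold, v)
    return spikes / T
```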

    Method to Generate Surfaces with Desired Roughness Parameters

    A surface engineering method based on the electrostatic deposition of microparticles and dry etching is described and shown to independently tune both the amplitude and spatial roughness parameters of the final surface. Statistical models were developed to connect the process variables to the amplitude parameters (center-line average and root-mean-square) and a spatial parameter (autocorrelation length) of the final surfaces. The process variables are particle coverage, which affects both amplitude and spatial roughness parameters; particle size, which affects only spatial parameters; and etch depth, which affects only amplitude parameters. Correlations between experimental data and model predictions are discussed.
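    The division of labor among process variables can be illustrated with a toy parameter map. The functional forms and coefficients below are hypothetical placeholders, chosen only so that each variable affects the parameters the abstract says it does:

```python
import numpy as np

# Illustrative mapping (coefficients hypothetical): etch depth drives
# amplitude parameters, particle size drives only the spatial one, and
# coverage enters both, mirroring the dependency structure above.
def roughness_params(coverage, particle_size, etch_depth):
    Rq = 0.5 * etch_depth * np.sqrt(coverage * (1.0 - coverage))  # root-mean-square
    Ra = 0.8 * Rq                                                  # center-line average
    acl = particle_size * coverage ** -0.5                         # autocorrelation length
    return Ra, Rq, acl
```

    Doubling the etch depth scales the amplitude parameters while leaving the autocorrelation length untouched, which is the independence the method exploits.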

    Spiking sampling network for image sparse representation and dynamic vision sensor data compression

    Sparse representation has attracted great attention because it can greatly reduce storage requirements and uncover representative features of data in a low-dimensional space. As a result, it is widely applied in engineering domains including feature extraction, compressed sensing, signal denoising, image clustering, and dictionary learning, to name a few. In this paper, we propose a spiking sampling network. The network is composed of spiking neurons, and it can dynamically decide which pixels should be retained and which should be masked according to the input. Our experiments demonstrate that this approach yields a better sparse representation of the original image and facilitates image reconstruction compared to random sampling. We then use this approach to compress massive data from a dynamic vision sensor, greatly reducing the storage requirements for event data.
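    A minimal sketch of input-dependent sampling with spiking neurons, assuming a single layer of leaky integrate-and-fire units whose spikes form the pixel mask (the paper's network is learned; this is illustrative only):

```python
import numpy as np

def spiking_sample_mask(image, threshold=0.5, leak=0.8, T=8):
    """Drive one LIF neuron per pixel with that pixel's intensity for
    T steps; pixels whose neuron fires at least once are retained."""
    v = np.zeros_like(image, dtype=float)
    mask = np.zeros_like(image, dtype=bool)
    for _ in range(T):
        v = leak * v + image          # leaky integration of the input
        fired = v >= threshold
        mask |= fired                  # retain any pixel that ever fires
        v = np.where(fired, 0.0, v)    # reset to zero after a spike
    return mask

image = np.array([[0.9, 0.05],
                  [0.02, 0.7]])
mask = spiking_sample_mask(image)
sparse = image * mask  # retained pixels keep their value, others are zeroed
```

    With the leak at 0.8, a weak pixel's potential saturates at intensity/(1 - leak), so pixels below 0.1 here can never reach the threshold and are masked out.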

    Self-supervised Domain-agnostic Domain Adaptation for Satellite Images

    Domain shift caused by, e.g., different geographical regions or acquisition conditions is a common issue in machine learning for global-scale satellite image processing. A promising way to address this problem is domain adaptation, in which the training and testing datasets are split into two or more domains according to their distributions, and an adaptation method is applied to improve the generalizability of the model on the testing dataset. However, defining the domain to which each satellite image belongs is not trivial, especially in large-scale multi-temporal and multi-sensor scenarios, where a single image mosaic may be generated from multiple data sources. In this paper, we propose a self-supervised domain-agnostic domain adaptation (SS(DA)2) method that performs domain adaptation without such a domain definition. To achieve this, we first design a contrastive generative adversarial loss to train a generative network that performs image-to-image translation between any two satellite image patches. Then, we improve the generalizability of the downstream models by augmenting the training data with the spectral characteristics of different testing patches. Experimental results on public benchmarks verify the effectiveness of SS(DA)2.
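    The learned image-to-image translation itself is beyond a short sketch, but the augmentation principle, restyling a training patch toward another patch's spectral characteristics, can be illustrated with simple per-band moment matching. This is not the paper's contrastive GAN, only a crude stand-in for what the translation network accomplishes:

```python
import numpy as np

def match_spectral_stats(src, ref):
    """Shift and scale each spectral band of src so it has ref's
    per-band mean and standard deviation. A crude stand-in for
    learned patch-to-patch translation."""
    out = np.empty_like(src, dtype=float)
    for b in range(src.shape[-1]):
        s, r = src[..., b], ref[..., b]
        out[..., b] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return out

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(16, 16, 3))   # training patch (H, W, bands)
ref = rng.normal(5.0, 2.0, size=(16, 16, 3))   # testing-style patch
styled = match_spectral_stats(src, ref)         # src content, ref statistics
```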